Search Results: "mattb"

23 September 2006

Matt Brown: FreeRadius patch superfluous

Yesterday’s patch for FreeRadius turns out to be superfluous, as the functionality is already present, it’s just undocumented! I submitted the patch to the FreeRadius bug tracking system (#392) and got back a quick reply from Alan DeKok saying the following:
It isn’t well documented, but it’s already supported, via the
EAP-TLS-Require-Client-Cert attribute. This allows you to have
the cert requirement on a per-realm, or per-user basis.
Oh well, at least the patch didn’t take too long to write! I had seen the code that handles the EAP-TLS-Require-Client-Cert attribute, but I couldn’t find any references to it elsewhere in the daemon, so I assumed it was an unused fragment and ignored it. Moral of the story: assume less and spend more time understanding the code you’re patching!
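For anyone wanting to try it, I’d guess the users file is the place to set the attribute; something along these lines should require a certificate for every user (untested on my part, and the DEFAULT entry is just an example — you could equally set it against a specific user or realm):

DEFAULT  EAP-TLS-Require-Client-Cert := Yes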

22 September 2006

Matt Brown: Requiring client certificates for EAP-TTLS with FreeRadius

For the project that I’m dealing with at work I wanted to be able to authenticate devices in a two-stage process. Stage 1 should authenticate the device to the network (via an X.509 certificate) and then Stage 2 should authenticate the user who possesses the device with a username and password. Unfortunately none of the methods shipped by default with FreeRadius support this sort of configuration. A quick skim through the FreeRadius source code revealed that it wouldn’t be too hard to add support for requiring client certificates with EAP-TTLS and EAP-PEAP. The following patch adds a new configuration option to the tls section of eap.conf which, if set to true, will require the client to present a certificate before authentication will succeed. Example eap.conf:
eap {
        default_eap_type = ttls
        timer_expire     = 60
        ignore_unknown_eap_types = no
        cisco_accounting_username_bug = no

        md5 {
        }

        tls {
                private_key_file = ${raddbdir}/certs/radius.key
                certificate_file = ${raddbdir}/certs/radius.pem
                CA_file = ${raddbdir}/certs/cacert.pem
                dh_file = ${raddbdir}/certs/dh
                random_file = ${raddbdir}/certs/random
                # This is the new option
                # If set to no, or missing, client certificates are
                # not required for EAP-TTLS or EAP-PEAP
                require_client_cert = yes
        }

        ttls {
                default_eap_type = md5
        }
}

8 September 2006

Matt Brown: $16m of Broadband Challenge Funding Announced

David Cunliffe has just announced this morning that the Government has finally decided to approve 5 of the urban Broadband Challenge applications to build urban fibre networks. Hamilton is one of the accepted proposals, getting $3.3m of government funding to help cover the start-up costs of the network. Under the conditions of the Broadband Challenge the networks must be open access and provide a full duplex transfer rate of at least 1Gbps. Should be interesting to see how this develops! Press Releases:

5 August 2006

Matt Brown: Working sound on an HP dc7600

Back in April I mentioned the fun and games I had trying to get a nice dual-DVI setup working with my new HP dc7600. The other problem that I’ve been experiencing since then is a complete inability to play any audio. The machine has an Intel ICH7 High Definition audio chipset onboard which is theoretically well supported by the snd-hda-intel kernel module, and indeed everything loads fine and shows up in /proc correctly. However nothing ever comes out of the speakers! lspci output below:
0000:00:1b.0 Audio device: Intel Corporation 82801G (ICH7 Family) High Definition Audio Controller (rev 01)
Subsystem: Hewlett-Packard Company: Unknown device 3010
Flags: bus master, fast devsel, latency 0, IRQ 21
Memory at e0a00000 (64-bit, non-prefetchable) [size=16K]
Capabilities:
Today I sat down and tried to work out what was going on. It turns out the Realtek ALC260 chipset used on this motherboard supports quite a few different pinouts, so while snd-hda-intel knows how to drive it, you don’t actually get any sound unless you have the correct mapping set up! The driver attempts to do this automatically for a number of different motherboards, and there is an entry in the table for my particular motherboard (subsystem 0x3010) mapping it to the ALC260_HP configuration, but still nothing works! Eventually I stumbled across ALSA Bug #2157, which fixes an identical problem for a different motherboard (subsystem 0x3012) by using the ALC260_HP_3013 mapping in the driver. So, taking a guess, I made the same change for the 0x3010 mapping, rebooted, and voila, I have sound. Patch below:
--- sound/pci/hda/patch_realtek.c.orig  2006-08-05 21:13:49.000000000 +1200
+++ sound/pci/hda/patch_realtek.c       2006-08-05 20:33:17.000000000 +1200
@@ -2951,7 +2951,7 @@
 	{ .pci_subvendor = 0x152d, .pci_subdevice = 0x0729,
 	  .config = ALC260_BASIC }, /* CTL Travel Master U553W */
 	{ .modelname = "hp", .config = ALC260_HP },
-	{ .pci_subvendor = 0x103c, .pci_subdevice = 0x3010, .config = ALC260_HP },
+	{ .pci_subvendor = 0x103c, .pci_subdevice = 0x3010, .config = ALC260_HP_3013 },
 	{ .pci_subvendor = 0x103c, .pci_subdevice = 0x3011, .config = ALC260_HP },
 	{ .pci_subvendor = 0x103c, .pci_subdevice = 0x3012, .config = ALC260_HP },
 	{ .pci_subvendor = 0x103c, .pci_subdevice = 0x3013, .config = ALC260_HP_3013 },
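In theory the same mapping can be forced without recompiling, via the driver’s model option; something like the following in /etc/modprobe.d should do it, although I haven’t checked whether this driver version actually accepts the hp-3013 model string, so treat it as a guess:

options snd-hda-intel model=hp-3013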

30 July 2006

Matt Brown: Telecom Hasn’t Changed

As I predicted on the 27th of June, Telecom hasn’t changed a single bit. All their rhetoric about embracing the new competitive environment and voluntarily separating themselves has been posturing and PR, as evidenced by the following article in this morning’s Computerworld. Basically Telecom is saying Xtra is a brand, not a separate business unit, so it will continue to receive services in a different (and more favourable) manner to every other ISP. The only hope now is for the Government to actually muster up the courage to enforce a physical separation on Telecom… We live in hope.

24 July 2006

Matt Brown: Telecom Billing Scam!?

Is Thunderbird smarter than we think? Thunderbird thinks this email is a scam!

17 July 2006

Matt Brown: Patching the sc520 watchdog to reboot Soekris devices

We use a lot of Soekris devices at work, which often get deployed to remote locations where it’s not easy to access them if things go wrong. I’ve recently been writing some software to act as a watchdog on the machine. As a very last resort, this software makes use of the watchdog chipset provided by the Soekris boards, so that if userspace goes away the machine will get rebooted back into what is hopefully a sane state. Unfortunately the in-kernel sc520 watchdog driver doesn’t actually reboot the Soekris when the heartbeat ping from userspace is lost, which makes it virtually useless. There has been a patch floating around since 2004 that fixes this problem, but it hasn’t made it into the kernel for one reason or another; I think it’s because it’s Soekris-specific… I’ve updated the patch to apply cleanly against the sc520 driver found in 2.6.16 and you can find it here: http://www.mattb.net.nz/patches/linux/soekris-sc520wd.patch
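For anyone unfamiliar with how the last-resort heartbeat works, it’s just the standard Linux watchdog character device interface: open /dev/watchdog and write to it periodically. A minimal sketch (this is not my actual watchdog software, just an illustration of the mechanism, with an arbitrary 10 second interval):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Opening the device arms the watchdog timer. */
    int fd = open("/dev/watchdog", O_WRONLY);
    if (fd < 0) {
        perror("open /dev/watchdog");
        return 1;
    }
    for (;;) {
        /* Any write resets the hardware timer; if userspace dies and the
         * writes stop, the board should reboot when the timeout expires,
         * which is exactly what the unpatched sc520 driver fails to do. */
        if (write(fd, "\0", 1) != 1)
            perror("watchdog heartbeat");
        sleep(10);
    }
    return 0;
}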

27 June 2006

Matt Brown: Same old tricks from Telecom?

A press release from Telecom just landed in my inbox: TELECOM TO SEPARATE WHOLESALE AND RETAIL OPERATIONS. From what I can make out, Telecom is announcing that they’re going to ‘voluntarily’ do what the Government’s proposed legislation would force them to do anyway. That is, separate the accounting and business processes of their wholesale and retail departments. It’s not a full structural separation (which would involve different shareholders and upper management teams). It’s still a good thing, but the way the press release is worded makes me think Telecom is trying to confuse the public into thinking that a full structural separation has occurred.

21 June 2006

Matt Brown: Updated Lightweight Archive Scripts

I had a bit of a surprise today when I sat down to read this week’s DWN and found my own name and a link to an old post about some Lightweight Debian Archive Scripts that I had written in October last year staring back at me from the second sentence! As chance would have it, just last week I updated these scripts to use reprepro, as suggested in many of the comments that I received after writing the last post. I hadn’t got around to making the new scripts public yet, so I guess the link from DWN is a sign of sorts! My main motivation for updating the scripts to use reprepro was to give myself the ability to have both a testing and a stable archive; recently we’ve been dealing with .debs manually for testing, as we didn’t want to put them into the archive for fear of prematurely updating the production hosts with untested code. I’d heard that reprepro had some nice support for moving packages between local repositories, which sounded exactly like what I wanted. Reprepro was indeed fairly easy to install and set up. I backported reprepro 0.9.1 and installed it alongside my existing sbuild setup as documented in the original post. The changes to the scripts were fairly minimal, although again, they assume some things about my environment, and it’s likely that they’ll need some modification before being useful to you: You can grab the updated scripts and my reprepro configuration from: The new setup is great and I’m very happy with reprepro and the new archive. The only thing I haven’t had time to look into properly yet is how to pull specific packages from testing into stable without pulling everything. Ideally I would be able to run a command like
reprepro <stable-dist> pull <pkgname>
If I could get that working then it would be awesome.
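From a skim of the reprepro documentation, pull rules look like the way to get close to this: define a rule in conf/pulls, reference it from the stable entry in conf/distributions, and then run reprepro pull stable. Something along the lines of the sketch below, although I haven’t actually tried it yet and I’m not sure how much of it 0.9.1 supports; restricting the rule to a single package would presumably be a job for its FilterFormula or FilterList fields.

# conf/pulls (untested sketch)
Name: testing2stable
From: testing

# conf/distributions: added to the stable entry
Pull: testing2stable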

18 June 2006

Matt Brown: SQLite Support for dbconfig-common

Over the past few weeks I’ve been working on extending the dbconfig-common packages to support the SQLite database format. My primary motivation for this is so that I can start using dbconfig-common to manage the database(s) for the PHPwiki packages, which currently only support SQLite out of the box. The main changes that were required were to separate out the debconf questions that are only relevant for remote and authenticated database types. These changes were committed yesterday. I plan to make the remaining changes required to bring the SQLite support up to scratch over the next few days, so hopefully dbconfig-common 1.8.18 with SQLite support will be uploaded before too long.
NM Application
In other news, Alexander Wirt finished off my NM Report, so now I’m waiting for the front desk to review it and send me on to the DAM. It’s great to be making progress and a big thank you to Alexander for taking me through the process, testing my knowledge and sponsoring my packages.

5 June 2006

Matt Brown: The importance of truly Open Standards

I think the current brouhaha surrounding PDF functionality in Office 2007 is an excellent object lesson in why software patents shouldn’t be allowed anywhere near an Open Standard (and software in general!). I’m not sure if a patent is the tool of choice that Adobe is wielding in this particular example, but even if it’s not, patents present a large risk to Open Standards and should be resisted. It’s the same as the distinction between a Shared Source program and Open Source or Free Software. You may have the source code for a shared source program and be able to see how it is implemented and how it operates, but there are restrictions preventing you from doing anything with that information. Likewise a standard shouldn’t be called Open simply because you have details on how it is implemented (although that is undoubtedly a critical part of openness). An Open Standard should be one which is openly specified and completely free of any other encumbrances (such as patents). Unfortunately I don’t see any of the standards bodies making this a priority. My understanding of the IETF RFC process is that you have to file a statement declaring any IP claims, but you’re not prevented from standardising something that you hold a patent on. The W3C and the IEEE seem slightly less prone to this problem as they tend to have committee-oriented standards processes that require more interaction, but I’m still not aware of any direct efforts to prevent parts of the resulting standard from becoming patent-encumbered. If the lack of truly Open Standards can bite Microsoft, it can bite any of us. It’s not an issue we should be ignoring.

17 May 2006

Matt Brown: Playing With ZoomIn

I signed up for a ZoomIn API Key (not that it seems to be needed atm…) the other week and finally got a chance to have a play this past weekend. My test case was to build a map of all the CRCnet sites and the links between them to use as a plugin in the CRCnet Configuration System, either as a dynamic status display (colouring links to show traffic loads and status etc) or as a network planning tool to get a feel for the relationship between existing sites and potential new locations. Obviously the lack of topographical information makes the second case far less useful than it could be, but I think even in 2D it would still be a useful tool. Getting the points onto the map was relatively straightforward, but adding any sort of hover event to them was another matter. The GEvent class currently only supports the click event, and markers are not added with any other identifying attributes (such as an ID or name) which could be used to hook an event into them. After a quick squiz at the Terms and Conditions, I grabbed a copy of the API javascript, unobfuscated it (basically just adding back in line breaks and running it through indent(1)) and started to have a look at what was happening. The API code is very nice and clean and it wasn’t at all hard to work out what was going on. Mozilla’s Venkman javascript debugger absolutely rocks for this sort of work: it allows you to step through all the scripts on the page line by line and quickly get a feel for the flow of the code. Adding support for the hover (onmouseover) event doesn’t look like it would be too hard, but the T&C didn’t explicitly mention whether I was allowed to actually modify the API code, so I chose to simply create a CRCnetMarker class (see below) that creates a GMarker object and then pokes id attributes onto its internal objects before returning it. Then once you’ve called addOverlay you can use the basic DOM functions to hook an onmouseover event handler onto the ID of the marker.
var CRCnetMarker = function (sitename, point, icon)
{
    var marker = new GMarker(point, icon);
    marker.icon.id = "crcnet_container_" + sitename;
    marker.icon.firstChild.id = "crcnet_icon_" + sitename;
    marker.addEventListener = function (eventname, handler) {
        var b = document.getElementById(marker.icon.firstChild.id);
        b.addEventListener(eventname, handler, false);
    };
    return marker;
};

var marker = new CRCnetMarker("mph", new GPoint(2704685.21, 6367057.51));
map.addOverlay(marker);
marker.addEventListener('mouseover', hoverHandler);
That worked like a charm and I now have nice little info boxes popping up next to each node when you hover over them. I also discovered in the process of implementing this that Prototype and ZoomIn do not play nicely together. It appears to be Prototype’s fault, as it messes with the Array type by adding new methods (like each for enumeration), which breaks the builtin for (i in array) syntax because the iteration now also returns those non-numeric keys (like each). This has been reported in the Prototype bug tracker as breaking Yahoo Maps, but the comments don’t seem to offer much hope for Prototype’s behaviour changing anytime soon, which is a pity.

The next roadblock was a lack of support for drawing lines! This put the kibosh on the whole animated network status idea. Trawling through the source again reveals a GPolyLine class that appears to be semi-implemented, so hopefully support for drawing lines is coming very soon.

The only other thing I found slightly annoying was having to supply all the co-ordinates in NZGD49 format. Most of our GPS information for CRCnet is stored in WGS84 format (as that’s what our older GPS unit puts out). NZGD49 has been deprecated in favour of NZGD2000 (which is near enough to identical to WGS84 for normal use), so I’m not really sure why ZoomIn is still using NZGD49. Maybe that’s all TerraLink can supply? Some quick googling turned up the proj library (apt-get install proj in Debian), which provides the cs2cs utility that can convert from WGS84 (aka NZGD2000) to NZGD49 to feed to ZoomIn. The magic piece of information is the transformation parameters, which LINZ helpfully provides. Punching those (I chose the 7-parameter version) into cs2cs via a command line like

echo "$lat $long" /usr/bin/cs2cs +proj=latlong +datum=WGS84 +ellps=WGS84 +towgs84=0,0,0 +nodefs +to +proj=nzmg +datum=nzgd49 +ellps=intl +towgs84=59.47,-5.04,187.44,-0.47,0.10,-1.024,-4.5993 awk ' print $1" "$2 '

performs the magic conversion. The conclusion?
Overall I think ZoomIn rocks and it’s really cool to see a small NZ company filling in where Google and friends have failed miserably. The biggest weakness at the moment is definitely the API, which doesn’t really allow you to do much more than add points at this stage. Given that ZoomIn is still relatively young, the leanness of the API is understandable. Give the API a few more months to mature and fill out and I think we’ll be able to create some really cool applications on top of the ZoomIn platform. Screenshot:

14 May 2006

Matt Brown: SSC OSS Legal Guide v2

The State Services Commission has released a second version of their Legal Guide to Open Source Software that fixes the complaints that NZOSS had with the original document. I’m very pleased with this outcome and it’s great that we as a community could work constructively with the Government on this issue. Here’s hoping that the relationships built through this exercise can continue to be built upon, further increasing the mindshare that Free and Open Source Software has inside the Government.

3 May 2006

Matt Brown: Local Loop Unbundling

Wow! It’s out. The government has finally gained the courage to force Telecom to unbundle the local loop. Possibly the most interesting part of the whole announcement is the circumstances of it. Cabinet signed off on it this morning; by midday it had been leaked to Telecom, and the Government was forced to scramble to announce it to everyone to avoid regulatory problems with the sharemarket. No doubt whoever leaked it is feeling very, very worried about their job security right now! The Cabinet briefing paper and minutes that accompanied the original press release are a bad-quality scan and have obviously been prepared very quickly, with hand-written corrections to the page numbers in the latter half of the document. The document has now been removed from the website, which I guess means that it is being touched up. Email me if you want a copy. [Update 1am: It’s now back, but has had information redacted, presumably because it is meant to be commercially sensitive!] If you want the hard facts the following are good sources:

So, is this a good thing? I think it is in the long term, but the best part of today’s announcement is what goes alongside the LLU decision, not the LLU itself. More on that in a minute. The Cabinet paper (60 pages) appears to be a very thorough summary of the detailed analysis that has obviously been performed over the previous months. What I think is a fairly reasonable argument is made for why the action is needed and why the chosen course of action is the best option. I’m very impressed with David Cunliffe’s handling of the portfolio and I hope that he continues to work to the high standard that he’s set himself in the few short months since the election. He’s certainly going to face some opposition now! One thing to keep in mind in reading the paper and analysing the information available is that the Government’s hand was forced and there is probably a lot of implementation detail that they would have planned to release with the official announcement on the 18th. So there’s not much point in nit-picking details at this stage.

The meat of the announcement is that the Government has chosen a two-stage approach to increase the regulation in the sector. The two stages are basically short- and long-term actions designed to be complementary. The long-term action is the LLU itself and promotion of investment in alternative infrastructure. Because this will not be completed and ready to use in any shape or form until 2008 at the earliest, there is also a series of measures to beef up the current wholesale offerings to tide us over and improve the market in the interim. What we get in the short term: With prices and access terms for all these services to be set by the Commerce Commission, which is directed to ensure that the pricing is applied to protect investment incentives.

There are a few juicy paragraphs that suggest the Government has considered opposition and is ready for it, such as para 113, which warns that if Telecom does not invest quickly enough then full structural separation will be considered. However, there are also suggestions (para 113) that Telecom is going to be thrown a bone in the form of a recalculated TSO that will provide higher levels of compensation to support the necessary network upgrades in rural areas. Rodney Hide and no doubt other right-wing groups are already bleating about the “stolen” property rights. But as I see it that’s not the case. Telecom still owns the local loop. ISPs do not get to use the copper for free! They must pay Telecom a market rent. LLU is about regulating that price and forcing Telecom to offer the service where there is no incentive for Telecom to do otherwise given their vertically integrated monopoly. Secondly, Telecom was given a chance to avoid this scenario with the 2003 decision not to unbundle. At that point Telecom and its shareholders were warned that if they didn’t invest appropriately and provide a competitive wholesale market the situation would be reviewed. These warnings increased in frequency over the past year. Telecom cannot say that they were not warned. They may have massively misjudged the government, but they were warned.

The simple fact is that over the last three years Telecom has played games and stalled as much as possible. In many ways I feel they are the authors of their own misfortune. Overall, I’m very happy and excited. It’s a great time to be involved in the Internet industry in New Zealand. The government has churned out an excellent paper that shows that they have carefully analysed the issues and decided on a course of action that I think has an excellent chance of improving the state of broadband in New Zealand. LLU is not viewed as a magic bullet: other measures have been put in place to ease its introduction, and investment incentives to build the next generation of access infrastructure are mentioned and regarded as important. The big risk to watch is how the Government goes with the implementation. It is a hugely complex piece of policy to implement, with lots of potential for mistakes, and they are fighting against a well-resourced company with a lot to lose. No doubt this will not be my last post on the issue.

18 April 2006

Matt Brown: Desktop with Dual-DVI

I recently acquired a new desktop for work which came with a 17″ Philips LCD screen. Given that I work from home and already owned a 17″ LCD, the scene was set for a nice dual-head setup, or so I thought. Having two LCD monitors side by side, I wanted them both to be running on DVI. This was not to prove an easy task. The first hurdle was finding a video card with Dual-DVI output for a reasonable price. This was made even harder by the fact that I was originally looking for an AGP card to avoid having to upgrade the entire motherboard. Eventually we gave up on this option and I ended up getting an HP dc7600 mini tower with a PCI-e slot to use for my desktop. We also ordered a Power Color card based on the Radeon X1300 chipset to satisfy the Dual-DVI requirement.

Unfortunately when the machine arrived (around the end of March) I discovered that the X1300 was not yet supported by the open source X.org drivers and, even worse, there was no support for it in the latest fglrx release from ATI either! This was sloppy checking on my part really. I’ve become so used to hardware “just working” thanks to Ubuntu that I didn’t even consider that the card might not be supported. I filed a support request with ATI asking when support for the X1300 was expected and received the standard “Linux isn’t supported, not aware of any upcoming revisions, we’ll support it when we get around to it” style response. Eventually we found a Radeon X850 based Dual-DVI PCI-e card for around twice the price of the X1300 and ordered that. I limped along with a single screen using the VESA driver until the card arrived.

Initial impressions with the X850 card were not promising. Although it is supported by the radeon driver in X.org 7.0, there seems to be a bug that prevents it from working correctly in dual-head mode. There are numerous tickets in the X.org bugzilla and the problem appears to be solved if you use DVI+VGA, but it’s still broken for DVI+DVI unfortunately. The next step was to try the fglrx drivers to see if I could get dual-head working with them. More problems! fglrx doesn’t play nicely with Xen (I run my desktop in a Xen dom0 with other testing domains (for sid, etc) in domUs). After reverting back to the standard Ubuntu kernel (i.e. no Xen) I’ve managed to get a nice Dual-DVI dual-head setup working with the X850.

However, in the process of installing the fglrx drivers I discovered that there had been a new release made on the 12th of April that, among other changes, added support for the X1300 chipset!!! Needless to say I’m not particularly impressed with the response from ATI’s helpdesk, given that only one week prior they had disclaimed any knowledge of when support for the X1300 chipset would be available. As it turns out, it’s a bit of a non-issue anyway, as I couldn’t make the X1300 work properly in a Dual-DVI setup. The secondary screen suffered all sorts of corruption and ghosting that I couldn’t work out how to remove. Running the second screen in VGA mode (via a DVI-VGA adaptor) was fine, but with a drop in quality due to the VGA connection.

So, after several months of waiting, almost a day of configuration twiddling, and the loss of Xen, I finally have a dual-head, DVI-based desktop setup. It’s very nice, but the cost is certainly high. I can’t believe that Dual-DVI doesn’t seem to be widely used; am I really that fussy about my monitors? Now that I’ve had to give up Xen, I’m trying to get VMware set up to do the job, but I’m already running into problems there: VMware refuses to use LVM devices for raw disk access, and the LD_PRELOAD solution provided by vmware-bdwrapper is not playing nicely with dapper’s multiarch setup at the moment. Hopefully with another day’s worth of twiddling I can get it working!

13 April 2006

Matt Brown: 2.6 Suckiness

Has anyone else noticed a sharp decline in the quality and stability of the 2.6 kernels recently? I know that they’ve supposedly done away with “stable” releases and changes are being merged from the development trees straight into the 2.6 mainline, but surely we can still expect some level of stability. I’ve run into some fairly major problems lately. In early December I tried to upgrade our Biscuit PC distribution’s kernel package from 2.6.13.1 to 2.6.14 but was stymied by the fact that 2.6.14 seems to have completely removed support for building out-of-kernel modules against the kernel headers only; the full kernel source is now required to be present. This does not play nicely with Debian kernel packages. I haven’t had time to try 2.6.15 yet to see if this change has been reverted, but it’s a fairly major thing to drop support for and completely removes the ability for me to build packages for external drivers like madwifi. Today I was upgrading a machine with software RAID from Woody -> Sarge and proceeded to upgrade the kernel too. I pulled down a copy of 2.6.15.4, ran make oldconfig and then verified that all the crucial options were still selected (RAID, Promise controller, e100, etc). make-kpkg, dpkg -i, reboot. BANG.

VFS: Cannot open root device "md0" or unknown-block(9,0)
Please append a correct "root=" boot option
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(9,0)

A quick investigation revealed the Promise controller as the source of the problem:

PDC20267: IDE controller at PCI slot 0000:02:0e.0
PDC20267: chipset revision 2
PDC20267: ROM enabled at 0xf4640000
PDC20267: 100% native mode on irq 22
PDC20267: neither IDE port enabled (BIOS)

Now, the Promise controller is certainly not disabled in the BIOS, and 2.4.21 manages to boot off it just fine, so what is wrong with 2.6.15!? Google suggested that I needed to enable the PDC202xx_FORCE option (which was enabled in 2.4.21, but in 2.6.15 is only available if you happen to select the completely unrelated driver for the Promise card one model up from the one I have - a bug that has apparently been acknowledged since 2.6.9 but still not fixed!!); however, that made no difference whatsoever. After a couple of hours of fiddling around with various kernel settings and boot options (did I mention this machine is 200km away from me…) I gave up and started trying earlier kernel versions, starting with the existing 2.6.13 kernel I was using for the Biscuit PC distribution, which worked first time. So far I’ve not been able to find any other reports of problems with 2.6.15 and the Promise drivers, but from what I saw today they are completely non-functional.

The common argument I seem to hear when I mention this subject (2.6 suckiness) is “but you should just use your distribution’s kernel”, to which my reply is “but I do…”, for my desktop. The standard Ubuntu kernels work great and I have had no reason to meddle with them. However my desktop needs are a far cry from what I need to be able to do for the servers I maintain at work, or the kernel packages that I need to build for our BiscuitPC distribution. I’m certainly not one of those Gentoo-type people who need the latest and greatest the instant it comes out, and I don’t try to run the latest kernel just for the sake of it. But the fact is, new kernels bring new features (not to mention bugfixes and security patches) that are frequently needed, especially in the networking and wireless areas. When I have to spend several days debugging and testing every time we need a new kernel, only to find some critical flaw that prevents us from using it, I have to stop and wonder whether the current kernel development process is working. Have others had similar experiences with 2.6 kernels recently? Is the fact that the latest kernel-image I can find in sid is 2.6.8 confirmation that it’s not just me, or am I really just overly angry due to the hours I’ve spent battling 2.6 lately?

Update: Several people have pointed out that I failed to look for linux-image instead of kernel-image and that 2.6.15 is already in sid. My mistake. And yes, I do plan to file bugs, not just rant, but sometimes you need to vent first :)

12 April 2006

Matt Brown: Loading GPG / SSH Keys from a USB Key, Round 2

Back in January I talked about setting up some scripts to automatically load ssh/gpg keys into the appropriate agents when you plugged in a USB key. I had quite a number of people ask me for my scripts, but they just weren’t quite ready. I’m still not entirely happy with the solution that I’ve come up with, but I figure it’s working well enough to get some feedback now. It’s based very heavily on the usb-storage script originally written by Sean Finney, so I think that means I owe him pizza now. However, while it’s based on the usb-storage script, it has changed in a few major ways:

The mounting of the partition is the key change and I’m still tossing up whether the way I’m doing it is best or whether I should return to having it handled by the script. The primary reason for changing it was to allow the partition to be mounted in a stable location (as opposed to a random directory under /var/tmp) so that I could symlink from appropriate places in my home directory to the partition on the key. The symlinking is needed to keep GPG happy, as gpg-agent seems to store only the passphrase and requires access to the private key whenever you need to sign/encrypt something. The way ssh-agent works is much nicer in this respect, in that once you’ve loaded a key into the agent it doesn’t need to refer to it on disk again. Currently I’m using autofs to mount the partition as needed and this seems to be working well. It’s probably possible to go back to mounting the partition at a stable location from within the script without too much hassle. You can grab the script from http://www.mattb.net.nz/debian/misc/manage-keys

The remaining details for my configuration are below. First, set up udev to rename the key partitions to a static name and then fire the script at the appropriate times:
/etc/udev/rules.d/usbkey.rules

ACTION=="add", KERNEL=="sd?2", SYSFS serial ="A0494386139B005B", NAME="%k", SYMLINK="usbkeys", RUN+="/usr/local/bin/manage-keys"
ACTION=="remove", KERNEL=="sd?2", RUN+="/usr/local/bin/manage-keys"
Then set up autofs to mount the partition on demand:
/etc/auto.master

/media/usb /etc/auto.usbkey --timeout=10

/etc/auto.usbkey

keys -fstype=ext3,ro,noatime,nosuid,nodev :/dev/usbkeys
I keep only id_dsa and secring.gpg on the key and symlink from the appropriate places in my homedir to /media/usb/keys/
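Creating those symlinks is just a matter of (using the paths from my setup; adjust to suit):

ln -s /media/usb/keys/id_dsa ~/.ssh/id_dsa
ln -s /media/usb/keys/secring.gpg ~/.gnupg/secring.gpg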

matt@argon:~$ ls -l .ssh/
total 76
-rw------- 1 matt matt 612 2006-04-12 22:45 authorized_keys
-rw-r--r-- 1 matt matt 2694 2006-04-12 22:46 config
lrwxrwxrwx 1 matt matt 22 2006-04-13 01:08 id_dsa -> /media/usb/keys/id_dsa
-rw-r--r-- 1 matt matt 612 2006-04-12 22:46 id_dsa.pub
-rw-r--r-- 1 matt matt 58851 2006-04-12 23:10 known_hosts
matt@argon:~$ ls -l .gnupg/
total 2336
-rw-r--r-- 1 matt matt 126 2006-04-12 22:56 gpg.conf
drwx------ 2 matt matt 4096 2006-04-12 23:07 private-keys-v1.d
-rw------- 1 matt matt 1175737 2006-04-12 23:29 pubring.gpg
-rw------- 1 matt matt 600 2006-04-13 01:51 random_seed
lrwxrwxrwx 1 matt matt 27 2006-04-13 01:08 secring.gpg -> /media/usb/keys/secring.gpg
-rw------- 1 matt matt 10560 2006-04-12 23:27 trustdb.gpg
And that’s basically it. The script takes care of the rest. The main problem I’m having with the script at the moment is that it doesn’t autolock the screen when you remove the key, because gnome-screensaver-command lacks the necessary environment variables to find the DBUS socket it needs to talk to its backend. I need to read up on DBUS/gnome-screensaver and sort out how to fix that tomorrow. Update: Updated the example udev config so it doesn’t run a script out of /home.
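For the DBUS issue above, my current thinking is that the script will need to recover DBUS_SESSION_BUS_ADDRESS from an already-running session process before calling gnome-screensaver-command. A rough, untested sketch of the idea (gnome-session is just a convenient process to borrow the variable from, and the username is mine):

# find a process owned by the logged-in user that has the session bus address
PID=$(pgrep -u matt gnome-session | head -n 1)
# import DBUS_SESSION_BUS_ADDRESS from its environment, then lock the screen
export $(tr '\0' '\n' < /proc/$PID/environ | grep '^DBUS_SESSION_BUS_ADDRESS=')
gnome-screensaver-command --lock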

7 April 2006

Matt Brown: Porting code to Windows

I’ve set myself the task of porting libtrace to Windows, so that Perry and I can use it in an upcoming project that we’ve been scheming about for a long time. It’s going to be a reasonably large job I think! For a start it’s going to involve relearning all the MSVC oddities that I knew back when I last wrote code for Windows (over 4 years ago), and then there are all the stupid compatibility issues between various header files and type definitions to work out. I haven’t been able to find a good document that lists all the common problems that you run into porting a GNU library to Windows, which surprised me somewhat. There are plenty of documents about how to port code from Windows to Linux, however. If anyone knows of some handy resources I would love to hear about them. Cygwin/Mingw32 etc are not suitable for this project.

14 March 2006

Matt Brown: Response to the SSC Legal Guide on OSS

I promised that I would post again stating how I thought NZOSS should respond to the SSC legal guide. Instead I got caught up helping Peter actually draft the response. I was only involved in the first few drafts before I had to turn back to real work; however, Peter and many of the other NZOSS participants finished off the document. I think the final product is slightly more verbose and touches on more points than I would have raised personally, but I do think it sets the right tone overall and certainly won’t do us any harm. This seems to have been justified by the news today that the SSC has offered to fly two representatives of the NZOSS down to Wellington on Friday to meet and discuss the document. I would have liked to have gone, but given that I’m going to be attending NZNOG for most of next week I don’t think I can really justify the time off work. It would be good to link to the actual response itself here, but as far as I’m aware it’s not up on the NZOSS website yet. There is an earlier draft at http://www.devcentre.org/ssc-response-legal-guide-oss-2006-03-08.pdf which is very close to the final in content; I think only grammar and minor formatting was changed after this point. Now we sit back and wait to see what the SSC will do after the discussion on Friday. All in all I think this whole episode has underscored how healthy the level of open source support within the NZ government is.

Matt Brown: SSC - Open Source Legal Guidelines

The State Services Commission (SSC) (the NZ government body that oversees NZ government departments) recently released a guide to legal issues in using Open Source Software. The publication of this guide has caused a certain amount of consternation in the open source community and even managed to make it to Slashdot and Groklaw, as well as spawning a fairly active thread on the NZ Open Source Society’s openchat mailing list. Through all of this I’ve found myself very much on the opposite side of opinion to most other people commenting on it. My initial thoughts on the document are quite nicely summarised by the Groklaw comment that Stuart points out. While the document uses some unfortunate language, there is nothing in it that is actually untrue. In fact my initial feelings towards the document (without having read it thoroughly) were that it was good to see the SSC taking enough notice of open source to feel the need to advise departments on the particular issues that it can bring up.

Particularly disappointing to me are the conspiracy theories that have been flying around. Groklaw surmises that the paper is a Microsoft hatchet job simply because it was authored by Chapman Tripp (a large NZ law firm) who happen to also represent Microsoft NZ on a range of intellectual property issues. I’ve yet to hear a single shred of evidence to back up this assertion and I find it quite ridiculous. Large firms working in a small market such as NZ will often run into conflict of interest issues and have well-established procedures for dealing with them. Without seeing evidence to the contrary I cannot accept that a reputable firm like Chapman Tripp would risk their reputation by trying to intentionally mislead the government in a report like this. A far more likely scenario, to my mind, is simply that the author(s) of the report are not very familiar with open source issues and didn’t do quite as much research as they should have to bring themselves up to speed with the unique IP features of open source licenses.

Having read the document more thoroughly now, I can agree that it is far from perfect. I think the single most glaring problem is that it focusses only on open source software and doesn’t mention that many of the risks also exist when using proprietary software. This could leave an uneducated reader completely scared off open source due to a mistaken belief that it is far too risky. On the other hand we shouldn’t completely dismiss the document either, as it does raise a number of clear risks that a department will face in adding open source software to their IT environment. We’re not going to do ourselves any favours by pretending that open source is a panacea for all the world’s software ills and can just be dropped in to any IT environment to make things better!

It’s worth pointing out at this point that previous evidence from the SSC indicates that they are open source friendly. They have a very clear policy stating that Departments are encouraged to consider open source software alongside proprietary software and use it in cases where it wins on the basis of cost, functionality, interoperability, and security. Given this history, I am alarmed at the hostile reactions many participants on the NZOSS open-chat mailing list had to the document. Quite frankly I think some people are responding far too emotively to the language in the document (which is problematic - but not that bad) and missing the chance to evaluate the rest of it in an objective manner. We have an SSC that has indicated a friendliness towards open source in the past, and I don’t think that they would turn around and reverse their position in this manner. This report is intended to support their overall policy, after all. Rather than jumping down their throats and shouting about how poor the document is, we need to engage in a civil dialogue, point out the issues with the document and offer constructive suggestions for how it can be improved.

To that end, Peter Harrisson has started some pages on the NZOSS wiki where we are going to co-ordinate our impressions and responses to the document so we can present a consistent NZOSS position to the SSC in response. I’d encourage everyone with an interest in the document to read and contribute to those pages. As for exactly what form our response should take and what it should consist of? I’ll follow up with another post on that later in the weekend when I’ve had a bit more time to reflect.

What I do know (if you haven’t guessed already) is that I think we should be very careful not to respond in a hostile and emotive manner that will result in our relationship with the SSC becoming worse. That means we should take time to consider the document objectively, evaluate its propositions and provide a well-researched and substantiated list of suggested improvements to the SSC.
